"large language model" meaning in English

Noun

Forms: large language models [plural]
Head templates: {{en-noun|head=large language model}} → large language model (plural large language models)
  1. (machine learning) A type of neural network specialized in language, typically including billions of parameters.
     Wikidata QID: Q115305900
     Synonyms: LLM
     Hypernyms: LM, language model, model
     Derived forms: L.L.M., LLM
     Related terms: pablumese
     Coordinate terms: SLM, small language model
     Translations:
       Chinese Mandarin: 大型语言模型
       Finnish: suuri kielimalli
       French: grand modèle de langage [masculine]
       German: Large Language Model [neuter], großes Sprachmodell [neuter]
       Hungarian: nagy nyelvi modell
       Japanese: 大規模言語モデル
       Korean: 대형 언어 모델 (daehyeong eoneo model)
       Ojibwe: gichi inwewin wendad
       Polish: duży model językowy
       Spanish: modelo extenso de lenguaje [masculine], modelo de lenguaje de gran tamaño [masculine], modelo de lenguaje grande [masculine], gran modelo de lenguaje [masculine], modelo de lenguaje a gran escala [masculine], modelo lingüístico de gran tamaño [masculine]
       Vietnamese: mô hình ngôn ngữ lớn

{
  "forms": [
    {
      "form": "large language models",
      "tags": [
        "plural"
      ]
    }
  ],
  "head_templates": [
    {
      "args": {
        "head": "large language model"
      },
      "expansion": "large language model (plural large language models)",
      "name": "en-noun"
    }
  ],
  "lang": "English",
  "lang_code": "en",
  "pos": "noun",
  "senses": [
    {
      "categories": [
        {
          "kind": "other",
          "name": "English entries with incorrect language header",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Entries with translation boxes",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Pages with 1 entry",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Pages with entries",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Finnish translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with French translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with German translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Hungarian translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Japanese translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Korean translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Mandarin translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Ojibwe translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Polish translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Spanish translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Terms with Vietnamese translations",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "langcode": "en",
          "name": "Linguistics",
          "orig": "en:Linguistics",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "langcode": "en",
          "name": "Machine learning",
          "orig": "en:Machine learning",
          "parents": [],
          "source": "w"
        }
      ],
      "coordinate_terms": [
        {
          "word": "SLM"
        },
        {
          "word": "small language model"
        }
      ],
      "derived": [
        {
          "word": "L.L.M."
        },
        {
          "word": "LLM"
        }
      ],
      "examples": [
        {
          "bold_text_offsets": [
            [
              56,
              76
            ],
            [
              350,
              370
            ],
            [
              350,
              371
            ]
          ],
          "ref": "2022 April 15, Steven Johnson, “A.I. Is Mastering Language. Should We Trust What It Says?”, in The New York Times, →ISSN, archived from the original on 16 Jul 2022:",
          "text": "GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years.",
          "type": "quotation"
        },
        {
          "bold_text_offsets": [
            [
              11,
              31
            ]
          ],
          "ref": "2023, Mohak Agarwal, Generative AI for Entrepreneurs in a Hurry, Notion Press, →ISBN:",
          "text": "GPT-2 is a large language model with 1.5 billion parameters, trained on a dataset of 8 million web pages scraped from the internet.",
          "type": "quotation"
        },
        {
          "bold_text_offsets": [
            [
              175,
              196
            ]
          ],
          "ref": "2023 May 20, John Naughton, “When the tech boys start asking for new regulations, you know something’s up”, in The Observer, →ISSN, archived from the original on 03 Jun 2023:",
          "text": "Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance.",
          "type": "quotation"
        },
        {
          "bold_text_offsets": [
            [
              0,
              21
            ]
          ],
          "ref": "2025 April 13, Stephen Ornes, “Small Language Models Are the New Rage, Researchers Say. Larger models can pull off a wider variety of feats, but the reduced footprint of smaller models makes them attractive tools”, in Wired, archived from the original on 16 Apr 2025:",
          "text": "Large language models work well because they’re so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of “parameters” […] With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate. But this power comes at a cost […] huge computational resources […] energy hogs […] In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters—a fraction of their LLM counterparts.",
          "type": "quotation"
        }
      ],
      "glosses": [
        "A type of neural network specialized in language, typically including billions of parameters."
      ],
      "hypernyms": [
        {
          "word": "LM"
        },
        {
          "word": "language model"
        },
        {
          "word": "model"
        }
      ],
      "id": "en-large_language_model-en-noun-en:Q115305900",
      "links": [
        [
          "machine learning",
          "machine learning"
        ],
        [
          "neural network",
          "neural network"
        ],
        [
          "language",
          "language"
        ],
        [
          "billion",
          "billion"
        ],
        [
          "parameter",
          "parameter"
        ]
      ],
      "qualifier": "machine learning",
      "raw_glosses": [
        "(machine learning) A type of neural network specialized in language, typically including billions of parameters."
      ],
      "related": [
        {
          "word": "pablumese"
        }
      ],
      "senseid": [
        "en:Q115305900"
      ],
      "synonyms": [
        {
          "word": "LLM"
        }
      ],
      "translations": [
        {
          "code": "cmn",
          "lang": "Chinese Mandarin",
          "lang_code": "cmn",
          "sense": "Translations",
          "word": "大型语言模型"
        },
        {
          "code": "fi",
          "lang": "Finnish",
          "lang_code": "fi",
          "sense": "Translations",
          "word": "suuri kielimalli"
        },
        {
          "code": "fr",
          "lang": "French",
          "lang_code": "fr",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "grand modèle de langage"
        },
        {
          "code": "de",
          "lang": "German",
          "lang_code": "de",
          "sense": "Translations",
          "tags": [
            "neuter"
          ],
          "word": "Large Language Model"
        },
        {
          "code": "de",
          "lang": "German",
          "lang_code": "de",
          "sense": "Translations",
          "tags": [
            "neuter"
          ],
          "word": "großes Sprachmodell"
        },
        {
          "code": "hu",
          "lang": "Hungarian",
          "lang_code": "hu",
          "sense": "Translations",
          "word": "nagy nyelvi modell"
        },
        {
          "code": "ja",
          "lang": "Japanese",
          "lang_code": "ja",
          "sense": "Translations",
          "word": "大規模言語モデル"
        },
        {
          "code": "ko",
          "lang": "Korean",
          "lang_code": "ko",
          "roman": "daehyeong eoneo model",
          "sense": "Translations",
          "word": "대형 언어 모델"
        },
        {
          "code": "oj",
          "lang": "Ojibwe",
          "lang_code": "oj",
          "sense": "Translations",
          "word": "gichi inwewin wendad"
        },
        {
          "code": "pl",
          "lang": "Polish",
          "lang_code": "pl",
          "sense": "Translations",
          "word": "duży model językowy"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "modelo extenso de lenguaje"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "modelo de lenguaje de gran tamaño"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "modelo de lenguaje grande"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "gran modelo de lenguaje"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "modelo de lenguaje a gran escala"
        },
        {
          "code": "es",
          "lang": "Spanish",
          "lang_code": "es",
          "sense": "Translations",
          "tags": [
            "masculine"
          ],
          "word": "modelo lingüístico de gran tamaño"
        },
        {
          "code": "vi",
          "lang": "Vietnamese",
          "lang_code": "vi",
          "sense": "Translations",
          "word": "mô hình ngôn ngữ lớn"
        }
      ],
      "wikidata": [
        "Q115305900"
      ]
    }
  ],
  "word": "large language model"
}
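
The block above is ordinary JSON, so it can be navigated with standard tooling. Below is a minimal Python sketch, under the assumption that the object has been saved to a local file named entry.json (a placeholder name, not part of the export); the field names match the structure shown above.

import json

# Load the entry object shown above (assumed saved locally as entry.json).
with open("entry.json", encoding="utf-8") as f:
    entry = json.load(f)

sense = entry["senses"][0]
print(sense["glosses"][0])   # the definition text
print(sense["wikidata"][0])  # Q115305900

# Group the sense-level translations by language; Spanish, for example,
# carries six alternative renderings.
by_lang = {}
for tr in sense["translations"]:
    by_lang.setdefault(tr["lang"], []).append(tr["word"])
for lang, words in sorted(by_lang.items()):
    print(f"{lang}: {', '.join(words)}")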

Raw JSONL data for this entry (6.5 kB) is available from the download page.


This page is part of the kaikki.org machine-readable English dictionary, based on structured data extracted on 2026-01-25 from the enwiktionary dump dated 2026-01-01 using wiktextract (f492ef9 and 9905b1f). The data shown here has been post-processed: various details (e.g., extra categories) have been removed, some information disambiguated, and additional data merged from other sources. See the raw data download page for the unprocessed wiktextract data.
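
For the raw JSONL download mentioned above, each line of the file is one complete JSON object like the one printed earlier. A minimal streaming sketch (the file name large_language_model.jsonl is a placeholder for wherever the download was saved):

import json

# Stream the raw wiktextract JSONL export: one entry object per line.
with open("large_language_model.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        print(entry["word"], f"({entry['lang']}, {entry['pos']})")
        for sense in entry.get("senses", []):
            for gloss in sense.get("glosses", []):
                print("  -", gloss)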

If you use this data in academic research, please cite Tatu Ylonen: Wiktextract: Wiktionary as Machine-Readable Structured Data, Proceedings of the 13th Conference on Language Resources and Evaluation (LREC), pp. 1317-1325, Marseille, 20-25 June 2022. Linking to the relevant page(s) under https://kaikki.org would also be greatly appreciated.